Towards Trustworthy Intelligent Robots - A Pragmatic Approach to Moral Responsibility
Authors
Abstract
Today’s robots are used mainly as advanced tools and do not have any capability of taking moral responsibility. However, autonomous, learning intelligent systems are developing rapidly, resulting in a new division of tasks between humans and robots. The biggest worry about autonomous intelligent systems seems to be the fear of humans losing control and robots running amok. We argue that for all practical purposes, moral responsibility in autonomous intelligent systems is best handled as a regulatory mechanism, with the aim of assuring desirable behavior. “Responsibility” can thus be ascribed to an intelligent artifact in much the same way as (artificial) “intelligence”. We simply expect a (morally) responsible artificial intelligent agent to behave in a way that is traditionally thought to require human (moral) responsibility. Technological artifacts are always part of a broader socio-technological system with distributed responsibilities. The development of autonomous, learning, morally responsible intelligent systems must consequently rely on several responsibility feedback loops: the awareness and preparedness for handling risks on the part of designers, producers, implementers, users and maintenance personnel, as well as the support of society at large, which provides feedback on the consequences of the use of robots back to designers and producers. This complex system of shared responsibilities should secure the safe functioning of distributed responsibility systems that include autonomous, (morally) responsible intelligent robots (softbots).
Similar Resources
Sharing Moral Responsibility with Robots: A Pragmatic Approach
Roboethics is a recently developed field of applied ethics which deals with the ethical aspects of technologies such as robots, ambient intelligence, direct neural interfaces, invasive nano-devices and intelligent softbots. In this article we look specifically at the issue of (moral) responsibility in artificial intelligent systems. We argue for a pragmatic approach, where responsibility is...
To delegate or not to delegate: Care robots, moral agency and moral responsibility
The use of robots in healthcare is on the rise, from robots that assist with lifting, bathing and feeding, to robots used for social companionship. Given that the tradition and professionalization of medicine and nursing have been grounded on the fact that care providers can assume moral responsibility for the outcome of medical interventions, we must ask whether or not a robot can assume moral re...
A New Intelligent Approach to Patient-cooperative Control of Rehabilitation Robots
This paper presents a new intelligent method to control rehabilitation robots, mainly by considering the reactions of the patient instead of executing a repetitive preprogrammed movement. It generates a general reference trajectory based on the patient’s different reactions during therapy. Three main reactions have been identified and included in the reference trajectory: small variations, force shocks in a single ...
Robot as Moral Agent: A Philosophical and Empirical Approach
What is necessary for robots to coexist with human beings? In order to do so, we suppose, robots must be moral agents. To be a moral agent is to bear one’s own responsibility, which others cannot take on in one’s place. We will argue that such irreplaceability consists in having an inner world, one which others cannot directly experience, such as pleasure and pain. And personality of a moral agent, w...
Outline of a sensory-motor perspective on intrinsically moral agents
We propose that moral behavior of artificial agents could (and should) be intrinsically grounded in their own sensory-motor experiences. Such an ability depends critically on seven types of competences. First, intrinsic morality should be grounded in the internal values of the robot arising from its physiology and embodiment. Second, the moral principles of robots should develop through their in...